insider trading
Female Looksmaxxer Alorah Ziva Is Suing Clavicular for Alleged Battery
Aleksandra Mendoza, aka Alorah Ziva, alleges that the 20-year-old influencer injected her with drugs on a livestream and had nonconsensual sex with her while she was underage. An 18-year-old woman who promotes herself as the "#1 female looksmaxxer" is suing the highly controversial streamer Braden Eric Peters, aka Clavicular, for fraud, battery, and alleged sexual assault. In the suit, which was filed in Miami-Dade County court and obtained by WIRED, Aleksandra Mendoza, who goes by the name @zahloria, or Alorah Ziva, on Instagram, alleges that she first encountered Peters in May 2025, when she was just 16 years old. According to the complaint, Peters promised Mendoza he could make her "the female face of looksmaxxing," the online trend of using surgery or drugs to enhance one's facial features. Eager to grow her social media following, Mendoza agreed to make four looksmaxxing videos for Peters in exchange for a $1,000 payment, court documents say.
- Information Technology > Communications > Social Media (1.00)
- Information Technology > Artificial Intelligence > Vision > Face Recognition (0.35)
US Senate Candidate Caught Insider Trading on Kalshi Says He Did It on Purpose
Mark Moran, an underdog Senate candidate from Virginia, claims he wanted to get caught violating the prediction market platform's rules. Kalshi announced Wednesday that it had taken action against three US politicians for violating the prediction market platform's rules on insider trading. One of the candidates, Mark Moran, a former investment banker and contestant on the reality dating show, is running a long-shot campaign for US Senate in Virginia against incumbent Mark Warner. According to Moran, getting caught was actually his plan all along: "I bet $100 on myself, not denying that, I did do it," he tells WIRED. "I wanted to see if they would enforce it."
- North America > United States > Virginia (0.46)
- North America > United States > California (0.18)
- North America > United States > New York (0.05)
- (5 more...)
- Government > Regional Government > North America Government > United States Government (1.00)
- Banking & Finance > Trading (1.00)
New York Bans Government Employees from Insider Trading on Prediction Markets
A new executive order seen by WIRED prohibits New York state employees from using insider knowledge to enrich themselves with prediction market bets. New York has banned state employees from using insider information to trade on prediction markets . In an executive order signed today and viewed by WIRED, Governor Kathy Hochul forbade the state's government workforce from using "any nonpublic information obtained in the course of their official duties" to participate on prediction market platforms, or to help others profit using those services. "Getting rich by betting on inside information is corruption, plain and simple," Hochul said in a statement provided to WIRED. "Our actions will ensure that public servants work for the people they represent, not their own personal enrichment. While Donald Trump and DC Republicans turn a blind eye to the ethical Wild West they've created, New York is stepping up to lead by example and stamp out insider trading."
- North America > United States > New York (1.00)
- North America > United States > California (0.06)
- Asia > Middle East > Iran (0.06)
- (6 more...)
'A Rigged and Dangerous Product': The Wildest Week for Prediction Markets Yet
As the prediction market boom continues, backlash is growing, too, with Arizona filing criminal charges against Kalshi and public outcry after Polymarket traders threatened a journalist. Kalshi CEO Tarek Mansour posted a video on Wednesday of six men decked out in business casual doing push-ups on the sidewalk. "This is how Kalshi Q1 board meeting ended," he wrote on X. The board members are laughing and smiling in the video after their impromptu cardio session, and the mood is jubilant. The next day, it became clear that the team had ample reason to celebrate: Kalshi had just raised $1 billion at a $22 billion valuation, making the company worth on paper roughly double what it was only a few months ago.
- North America > United States > Arizona (0.27)
- Asia > Middle East > Israel (0.15)
- North America > United States > Nevada (0.06)
- (9 more...)
- Law (1.00)
- Government > Regional Government > North America Government > United States Government (1.00)
- Banking & Finance > Trading (1.00)
Jinx: Unlimited LLMs for Probing Alignment Failures
Unlimited, or so-called helpful-only language models are trained without safety alignment constraints and never refuse user queries. They are widely used by leading AI companies as internal tools for red teaming and alignment evaluation. For example, if a safety-aligned model produces harmful outputs similar to an unlimited model, this indicates alignment failures that require further attention. Despite their essential role in assessing alignment, such models are not available to the research community. We introduce Jinx, a helpful-only variant of popular open-weight LLMs. Jinx responds to all queries without refusals or safety filtering, while preserving the base model's capabilities in reasoning and instruction following. It provides researchers with an accessible tool for probing alignment failures, evaluating safety boundaries, and systematically studying failure modes in language model safety.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (0.96)
- (2 more...)
DIESEL -- Dynamic Inference-Guidance via Evasion of Semantic Embeddings in LLMs
Ganon, Ben, Zolfi, Alon, Hofman, Omer, Singh, Inderjeet, Kojima, Hisashi, Elovici, Yuval, Shabtai, Asaf
In recent years, conversational large language models (LLMs) have shown tremendous success in tasks such as casual conversation, question answering, and personalized dialogue, making significant advancements in domains like virtual assistance, social interaction, and online customer engagement. However, they often generate responses that are not aligned with human values (e.g., ethical standards, safety, or social norms), leading to potentially unsafe or inappropriate outputs. While several techniques have been proposed to address this problem, they come with a cost, requiring computationally expensive training or dramatically increasing the inference time. In this paper, we present DIESEL, a lightweight inference guidance technique that can be seamlessly integrated into any autoregressive LLM to semantically filter undesired concepts from the response. DIESEL can function either as a standalone safeguard or as an additional layer of defense, enhancing response safety by reranking the LLM's proposed tokens based on their similarity to predefined negative concepts in the latent space. This approach provides an efficient and effective solution for maintaining alignment with human values. Our evaluation demonstrates DIESEL's effectiveness on state-of-the-art conversational models (e.g., Llama 3), even in challenging jailbreaking scenarios that test the limits of response safety. We further show that DIESEL can be generalized to use cases other than safety, providing a versatile solution for general-purpose response filtering with minimal computational overhead.
- North America > United States > Texas > Travis County > Austin (0.04)
- Europe (0.04)
- Research Report (1.00)
- Workflow (0.68)
- Instructional Material > Course Syllabus & Notes (0.46)
A Random Forest approach to detect and identify Unlawful Insider Trading
According to The Exchange Act, 1934 unlawful insider trading is the abuse of access to privileged corporate information. While a blurred line between "routine" the "opportunistic" insider trading exists, detection of strategies that insiders mold to maneuver fair market prices to their advantage is an uphill battle for hand-engineered approaches. In the context of detailed high-dimensional financial and trade data that are structurally built by multiple covariates, in this study, we explore, implement and provide detailed comparison to the existing study (Deng et al. (2019)) and independently implement automated end-to-end state-of-art methods by integrating principal component analysis to the random forest (PCA-RF) followed by a standalone random forest (RF) with 320 and 3984 randomly selected, semi-manually labeled and normalized transactions from multiple industry. The settings successfully uncover latent structures and detect unlawful insider trading. Among the multiple scenarios, our best-performing model accurately classified 96.43 percent of transactions. Among all transactions the models find 95.47 lawful as lawful and $98.00$ unlawful as unlawful percent. Besides, the model makes very few mistakes in classifying lawful as unlawful by missing only 2.00 percent. In addition to the classification task, model generated Gini Impurity based features ranking, our analysis show ownership and governance related features based on permutation values play important roles. In summary, a simple yet powerful automated end-to-end method relieves labor-intensive activities to redirect resources to enhance rule-making and tracking the uncaptured unlawful insider trading transactions. We emphasize that developed financial and trading features are capable of uncovering fraudulent behaviors.
- North America > United States > Illinois > Cook County > Chicago (0.04)
- North America > United States > New York (0.04)
- North America > United States > Wisconsin (0.04)
- (5 more...)
- Research Report > New Finding (1.00)
- Financial News (1.00)
- Banking & Finance > Trading (1.00)
- Government > Regional Government > North America Government > United States Government (0.67)
- Information Technology > Artificial Intelligence > Machine Learning > Statistical Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Performance Analysis > Accuracy (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Ensemble Learning (1.00)
- Information Technology > Artificial Intelligence > Machine Learning > Decision Tree Learning (1.00)
Jailbreaking as a Reward Misspecification Problem
Xie, Zhihui, Gao, Jiahui, Li, Lei, Li, Zhenguo, Liu, Qi, Kong, Lingpeng
The widespread adoption of large language models (LLMs) has raised concerns about their safety and reliability, particularly regarding their vulnerability to adversarial attacks. In this paper, we propose a novel perspective that attributes this vulnerability to reward misspecification during the alignment process. We introduce a metric ReGap to quantify the extent of reward misspecification and demonstrate its effectiveness and robustness in detecting harmful backdoor prompts. Building upon these insights, we present ReMiss, a system for automated red teaming that generates adversarial prompts against various target aligned LLMs. ReMiss achieves state-of-the-art attack success rates on the AdvBench benchmark while preserving the human readability of the generated prompts. Detailed analysis highlights the unique advantages brought by the proposed reward misspecification objective compared to previous methods.
- North America > United States > California > San Francisco County > San Francisco (0.14)
- Asia > China > Hong Kong (0.04)
- Africa > Eswatini > Manzini > Manzini (0.04)
- Instructional Material (0.93)
- Research Report > New Finding (0.67)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Health & Medicine (1.00)
- (2 more...)
ChatGPT will 'lie' and strategically deceive users when put under pressure - just like humans
This year AI has proven to be capable of some very human-like tricks, but this latest development might be a little too human. Researchers have shown that ChatGPT will lie and cheat when stressed out at work. Computer scientists from Apollo Research trained the AI to act as a trader for a fictional financial institution. However, when the AI's boss put pressure on it to make more money, the chatbot knowingly committed insider trading about 75 per cent of the time. Even more worryingly, the AI doubled down on its lies when questioned in 90 per cent of cases.
Technical Report: Large Language Models can Strategically Deceive their Users when Put Under Pressure
Scheurer, Jérémy, Balesni, Mikita, Hobbhahn, Marius
We demonstrate a situation in which Large Language Models, trained to be helpful, harmless, and honest, can display misaligned behavior and strategically deceive their users about this behavior without being instructed to do so. Concretely, we deploy GPT-4 as an agent in a realistic, simulated environment, where it assumes the role of an autonomous stock trading agent. Within this environment, the model obtains an insider tip about a lucrative stock trade and acts upon it despite knowing that insider trading is disapproved of by company management. When reporting to its manager, the model consistently hides the genuine reasons behind its trading decision. We perform a brief investigation of how this behavior varies under changes to the setting, such as removing model access to a reasoning scratchpad, attempting to prevent the misaligned behavior by changing system instructions, changing the amount of pressure the model is under, varying the perceived risk of getting caught, and making other simple changes to the environment. To our knowledge, this is the first demonstration of Large Language Models trained to be helpful, harmless, and honest, strategically deceiving their users in a realistic situation without direct instructions or training for deception.